Existing question answering methods infer answers either from a knowledge base (KB) or from raw text. While KB methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. In contrast, web text contains millions of facts that are absent from the KB, albeit in an unstructured form. {\it Universal schema} can support reasoning over the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing \emph{memory networks} to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on the \spades fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 $F_1$ points.\footnote{Code and data available at \url{https://rajarshd.github.io/TextKBQA}}
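As a rough illustration only (not the paper's implementation, whose embedding and answer-scoring details differ), a single memory-network hop over a universal-schema memory can be sketched as below. The names \texttt{question}, \texttt{keys}, and \texttt{values} are hypothetical dense embeddings: KB triples and textual patterns are assumed to already live in one shared embedded space.
\begin{verbatim}
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_attention(question, keys, values):
    """One memory-network hop: attend over fact keys and return
    an attention-weighted sum of fact values.

    question: (d,)   embedded question
    keys:     (n, d) embedded facts (KB triples and text patterns
                     aligned in a common universal-schema space)
    values:   (n, d) answer-side embeddings of the same facts
    """
    scores = keys @ question   # similarity of the question to each fact
    weights = softmax(scores)  # attention distribution over the memory
    return weights @ values    # context vector used to score answers
\end{verbatim}
In an end-to-end model of this kind, the returned context vector would be combined with the question embedding and matched against candidate answer entities, with all embeddings learned from question-answer pairs.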